Journal of Neurology
© Springer Science and Business Media LLC
Preprints posted in the last 30 days, ranked by how well they match Journal of Neurology's content profile, based on 26 papers previously published here. The average preprint has a 0.04% match score for this journal, so anything above that is already an above-average fit.
Yamagata, N.; Kimura, Y.; Matsui, H.; Yasunaga, H.
Background: Clinical evidence on the contemporary management and functional outcomes of patients with Wernicke encephalopathy remains limited. This study aimed to clarify the nationwide patterns of thiamine administration and functional outcomes at discharge. Methods: Using the Japanese nationwide inpatient Diagnosis Procedure Combination database, we identified patients hospitalized with Wernicke encephalopathy between July 2010 and March 2024. Initial intravenous thiamine doses were categorized as low (≤300 mg/day), medium (301-900 mg/day), or high (>900 mg/day). Outcomes included in-hospital mortality and functional status (Barthel Index) at discharge. Results: We identified 7856 patients with Wernicke encephalopathy. Over the 13-year study period, the proportion of patients receiving initial high-dose thiamine increased markedly from 5.4% to 49.0%, while the frequency of low-dose therapy decreased from 83.0% to 37.9%. Despite prompt intervention [median time to initial administration: 0 days (interquartile range, 0 to 0 days)], 56.1% of patients were discharged with impaired activities of daily living (Barthel Index <90), and the in-hospital mortality rate was 3.8%. Conclusions: High-dose thiamine treatment is increasingly implemented for Wernicke encephalopathy in Japan. Although in-hospital mortality was relatively low, the high prevalence of functional impairment at discharge, despite early treatment initiation, indicates a substantial burden of Wernicke encephalopathy. Given the limited clinical evidence, further research investigating the optimal thiamine dose and developing effective primary prevention strategies for Wernicke encephalopathy is needed.
Ludolph, A. C.; Heiman-Patterson, T.; Mora, J. S.; Rodriguez, G.; Bohorquez Morera, N.; Vermersch, P.; Moussy, A.; Mansfield, C.; Hermine, O.
Introduction: Amyotrophic lateral sclerosis (ALS) is a progressive neurodegenerative disease with limited treatment options. Masitinib, a tyrosine kinase inhibitor targeting microglial and mast cell activity in ALS pathogenesis, offers potential neuroprotection. This study presents a post-hoc analysis of long-term survivors treated with masitinib at 4.5 mg/kg/day in study AB10015, comparing observed survival to predicted and historical benchmarks. Methods: Study AB10015 was a randomized, double-blind, placebo-controlled trial assessing masitinib with riluzole in ALS patients. Overall survival (OS) was measured from symptom onset to death, encompassing the double-blind period and post-study follow-up, including an optional open-label program. The ENCALS model predicted survival of long-term survivors (≥5 years). A delay in the need for mechanical assistance, such as permanent ventilation, gastrostomy, tracheostomy, or wheelchair dependence, was used as a surrogate measure for quality of life (QoL). Results: Among 130 patients receiving masitinib 4.5 mg/kg/day, the 5-year survival rate from onset was 42.3%, increasing to 50.0% in patients with an ALSFRS-R progression rate from disease onset of <1.1 points/month (AB10015 primary efficacy population), and 52.9% in a subgroup of patients without complete loss of functionality at baseline. Half of the long-term survivors had satisfactory QoL, defined as no mechanical assistance. The median OS for long-term survivors (n=55) was 121 months versus the ENCALS-predicted 42 months, yielding a 79-month residual median survival gain. Long-term survivors were prevalent across ALS baseline prognostic factors, including slow or moderate disease progression rate (ΔFS), severe or moderate functional severity, bulbar or spinal site of onset, respiratory function, and age.
Long-term survival was less likely in patients with complete loss of function or fast-progressing disease (ΔFS ≥1.1 points/month) at baseline. Conclusions: Masitinib treatment in ALS patients showed a substantial survival benefit. Long-term survivors were largely independent of ALS prognostic factors, suggesting a subpopulation driven by microglial/mast cell activity. A recently identified biomarker detecting masitinib's effect on pro-inflammatory microglia may help identify responsive patients.
Ademi, M.; Morales Saute, J. A.; Dubec-Fleury, C.; Greenfield, J.; Wallis, R.; Gobeil, C.; Linton, L. R.; Nadke, A.; Horvath, R.; Klebe, S.; Santorelli, F.; Vural, A.; van de Warrenburg, B.; Gagnon, C.; Synofzik, M.; PROSPAX Consortium, ; Tezenas du Montcel, S.; Schuele, R.
As therapeutic options emerge for hereditary spastic paraplegias (HSP), clinical trials require outcome measures that reflect the disease aspects most important to patients. Patient priorities in HSP remain poorly defined. This study aimed to develop a regulatory-compliant framework of patient-prioritised health domains to evaluate treatment response in clinical trials. Patient-reported data on health impacts were collected via two multinational, multilingual online surveys conducted sequentially, including 616 and 504 patients across the clinical and genetic spectrum of HSP. Using a staged approach, we examined prevalence, relevance, and severity, focusing on health impacts that were (i) common, (ii) sensitive to disease progression, (iii) highly relevant to patients, and (iv) showed a strong severity-relevance correlation. Patient representatives contributed centrally to study design and prioritisation. Our patient-focused analysis yielded five highly prevalent and relevant core health domains: mobility, lower body function, autonomic dysregulation, pain, and psychosocial aspects. Ambulation and lower body function ranked highest across all disease stages. Among non-motor impacts, reduced ability to work, bladder incontinence, and fatigue were most relevant. In mild disease stages, reduced walking distance, reduced walking speed, and the urgency to empty the bladder were the most frequent and most relevant health impacts. This work provides the most comprehensive patient-reported and disease-stage-specific profiling of HSP health impacts to date. It lays the necessary groundwork for developing patient-focused outcome tools capable of capturing treatment effects in future trials.
Glenn, T.; Bilodeau, P.; Ali, A.; Bhattacharyya, S.
Background: Acute treatments for patients with spinal cord strokes (SCS), including lumbar drain, blood pressure augmentation, corticosteroids, antiplatelets, and anticoagulants, are largely extrapolated from literature on cerebral infarcts or based on suspected SCS physiology. This study adds to the knowledge of symptomatology and management of SCS. Methods: This retrospective cohort study included patients from one medical system from 2000-2025. Multivariate ordinal logistic regressions were performed to evaluate associations of SCS treatments with the primary outcome of ambulatory status (independently ambulatory, ambulatory with assistance, non-ambulatory) at first follow-up, as well as secondary outcomes of modified Rankin Scale (mRS) and modified Japanese Orthopedic Association (mJOA) scores. SCS severity by American Spinal Injury Association impairment scale (AIS) with grade A as the comparator, age, sex, and whether SCS was spontaneous/periprocedural were covariates. Odds ratios (OR) greater than 1 were associated with better ambulatory status, lower mRS, and higher mJOA. Results: 130 SCS patients were included. Median age at SCS onset was 62 years, 42% were female, and 39% were periprocedural. Median first follow-up was 57 days. AIS grade was A for 28%, B for 25%, C for 28%, and D for 26%. SCS severity had significant associations with outcomes. For ambulatory status, AIS B OR 2.78, 95% CI 1.03-7.69, p-value 0.045; AIS C OR 16.7, 95% CI 5.56-50.0, p-value <0.01; AIS D OR 125, 95% CI 33.3-500, p-value <0.01. Corticosteroids were associated with improved ambulatory status and mJOA at follow-up (OR 2.38, 95% CI 1.15-5, p-value 0.023 and OR 2.27, 95% CI 1.09-4.76, p-value 0.030, respectively). No treatment had a significant association with mRS. Conclusion: Initial SCS severity had the strongest association with outcomes. Corticosteroids were associated with a better ambulatory status and mJOA. 
This study can help guide clinician management of patients with SCS.
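The ordinal logistic regression results above are reported as odds ratios with Wald 95% confidence intervals. As a hedged illustration of that arithmetic (the coefficient and standard error below are hypothetical, chosen only to land near the corticosteroid estimate quoted in the abstract, not taken from the study), an odds ratio and its interval can be recovered from a regression coefficient:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient and its standard
    error into an odds ratio with a Wald 95% confidence interval."""
    or_ = math.exp(beta)
    lo = math.exp(beta - z * se)
    hi = math.exp(beta + z * se)
    return or_, lo, hi

# Hypothetical inputs: beta = 0.868 with SE = 0.375
or_, lo, hi = odds_ratio_ci(0.868, 0.375)
print(f"OR {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

Exponentiating the interval endpoints rather than the point estimate alone is what makes the reported CIs asymmetric around the OR.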
Kmiecik, M. J.; O'Brien, L.; Szpyhulsky, M.; Iodice, V.; Freeman, R.; Jordan, J.; Biaggioni, I.; Kaufmann, H.; Vickery, R.; Miller, A.; Saunders, E.; Rushton, E.; Valle, L.; Norcliffe-Kaufmann, L.
Background: Although neurogenic orthostatic hypotension (nOH) is a common and debilitating feature of multiple system atrophy (MSA), little is known about the burden of symptoms in the real world. Objectives: To design and conduct a cross-sectional community-based research survey targeting patients with MSA, with and without nOH. Methods: We recruited patients with MSA to complete an anonymous online survey covering three core themes: 1) timely diagnosis, 2) nOH pharmacotherapy and refractory symptoms, and 3) confidence in physician knowledge. Responses were grouped by pre-specified diagnostic certainty levels. Relationships between symptoms, function, and pharmacotherapy were assessed using univariate and multivariate methods. Results: We analyzed 259 respondents with a self-reported diagnosis of MSA (age: M=64.38, SD=8.09 years; 44% female). In total, 42% also had a diagnosis of nOH; 40% had symptoms highly suspicious of nOH but no diagnosis; and 21% reported having never had their blood pressure measured in the standing position at a clinical visit. Treatment with a pressor agent was independently associated with the presence of other symptoms of autonomic failure. Each additional nOH symptom reported increased the odds of requiring pharmacotherapy by 18%. Yet, despite anti-hypotensive medication use, 97% of patients reported limitations in their ability to bathe, cook, or arise from a chair/bed, with 76% needing caregiver support for refractory nOH symptoms. Conclusions: This cross-sectional representative sample shows that nOH is underrecognized and undertreated in MSA patients, leading to substantial functional limitations. It is our hope that these findings are leveraged for planning future trials and advocating for better treatments.
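The 18%-per-symptom figure above is a per-unit odds ratio, so it compounds multiplicatively over symptom counts. A minimal worked sketch (the symptom counts below are illustrative, not from the survey):

```python
# Each additional nOH symptom multiplies the odds of requiring
# pharmacotherapy by 1.18 (an 18% increase per symptom), so the
# odds ratio for k extra symptoms is 1.18**k, not 1 + 0.18*k.
per_symptom_or = 1.18

for k in (1, 3, 5):
    print(f"{k} additional symptom(s): OR {per_symptom_or ** k:.2f}")
```

With five additional symptoms the odds more than double, which is why per-unit odds ratios close to 1 can still matter clinically across a wide symptom range.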
Burnell, M.; Gonzalez-Robles, C.; Zeissler, M.-L.; Bartlett, M.; Clarke, C. S.; Counsell, C.; Hu, M. T.; Foltynie, T.; Carroll, C.; Lawton, M.; Ben-Shlomo, Y.; Carpenter, J.
Background: Most trials of Parkinson's disease (PD) measure progression over a short to medium time period using continuous rating scales that may be hard to interpret and less meaningful for patients. There is a lack of evidence connecting changes in these scales to changes in outcomes important to patients. Objectives: We present causal modelling to translate short-term disease-modifying treatment effects on functional rating scales into the 10-year risk of serious clinical progression milestones. Methods: We selected four important clinical milestones of disease progression from the Oxford Parkinson's Disease Centre "Discovery" cohort: dementia, any falls, frequent falls, and mortality. We proposed a causal framework for our research objectives so we could model the potential impact of a 30% reduction in disease progression slopes ("treatment effect") using the summation of parts I and II of the Movement Disorders Society Unified Parkinson's Disease Rating Scale (UPDRS12). This outcome was regressed on time to milestone using flexible parametric survival models. Marginal predictions of survival and survival difference at year 10 were then calculated for the Discovery cohort and for a counterfactual cohort applying the treatment effect, to estimate the relative and absolute reductions for the four clinical milestones. Results: The modelled increases in risk for each unit change in the UPDRS12 were as follows: dementia hazard ratio (HR)=1.52 (95% Confidence Interval (CI) 1.36-1.70), any falls HR=1.37 (95% CI 1.29-1.46), frequent falls HR=1.68 (95% CI 1.49-1.89), mortality HR=1.29 (95% CI 1.17-1.42). These models led to marginal predictions of absolute reductions, when progression was reduced by 30%, of between 4.0% (mortality) and 7.5% (frequent falls) at 10 years of follow-up.
Conclusions: We have demonstrated how a treatment effect in a trial specified in terms of a progression change of a rating scale can be contextualised into a long-term reduction in the probability of clinically relevant milestones. Whilst we have used PD as our exemplar, we believe this methodological approach is generalisable to other chronic progressive diseases where trials are often limited to a relatively short follow-up period and use some scalar measure of progression, but significant clinical milestones usually take longer to be observed. Keywords: Clinical trials; disease modifying therapies; causal estimation; prediction models
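The translation step described above can be sketched under the proportional hazards assumption, where a hazard ratio rescales baseline survival as S1(t) = S0(t)^HR. All numbers below are hypothetical (the baseline survival and the counterfactual score change are not from the paper; only the per-unit HR of 1.52 echoes the dementia model quoted above):

```python
def survival_under_hr(s0, hr):
    """Under proportional hazards, a hazard ratio `hr` rescales
    baseline survival as S1(t) = S0(t) ** hr."""
    return s0 ** hr

# Hypothetical: 10-year milestone-free survival of 0.85 at baseline,
# per-unit HR of 1.52, and a counterfactual arm scoring 2 UPDRS12
# points lower than the observed cohort.
s0 = 0.85
hr_per_unit = 1.52
delta_units = -2.0                       # treated arm: 2 points lower
composite_hr = hr_per_unit ** delta_units
s1 = survival_under_hr(s0, composite_hr)
abs_risk_reduction = (1 - s0) - (1 - s1)
print(f"composite HR {composite_hr:.3f}, "
      f"treated 10-year survival {s1:.3f}, "
      f"absolute risk reduction {abs_risk_reduction:.3f}")
```

This is only the final marginal-prediction arithmetic; the paper's flexible parametric survival models estimate the HR and baseline survival functions themselves.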
Loehrer, P. A.; Witt, L.; Nagel, M.; Chen, L.; Calvano, A.; Bopp, M. H. A.; Rizos, A.; Hillmeier, M.; Wichmann, J.; Nimsky, C.; Chaudhuri, K. R.; Dafsari, H. S.; Timmermann, L.; Pedrosa, D. J.; Belke, M.
Background: Subthalamic deep brain stimulation (STN-DBS) represents an established therapeutic intervention for advanced Parkinson's disease (PD), alleviating motor and non-motor symptoms. However, impulse control disorders (ICDs) present a complex challenge, with some patients experiencing postoperative improvements while others develop treatment-induced impulsive-compulsive behaviours (ICB). The mechanisms determining these variable outcomes remain poorly understood, highlighting the need to predict postoperative ICB outcomes. Methods: This prospective open-label study aimed to identify microstructural markers associated with postoperative changes in impulsive-compulsive behaviour following STN-DBS. Thirty-five patients underwent diffusion MRI and clinical evaluations preoperatively and six months postoperatively. A whole-brain voxel-wise analysis utilising diffusion tensor imaging (DTI) and neurite orientation dispersion and density imaging (NODDI) was conducted to explore associations between microstructural metrics and changes in the Questionnaire for Impulsive-Compulsive Disorders in Parkinson's Disease-Rating Scale (QUIP-RS). Results: Intact microstructure in frontolimbic white matter (WM) tracts, including the cingulum, insular cortex connections, and major association fibres, was associated with greater postoperative reductions in impulsive-compulsive symptoms. Conversely, intact microstructure in specific grey matter areas, including the paracingulate gyrus, insular cortex, and precentral gyrus, was associated with lower reductions or increases in postoperative ICB. Conclusion: These findings demonstrate that preoperative microstructural integrity within frontolimbic circuits and executive control networks associates with susceptibility to treatment-emergent impulsive-compulsive behaviours following STN-DBS.
The convergent evidence from multiple diffusion metrics suggests that diffusion MRI may serve as a valuable tool for identifying patients at risk for developing ICB, potentially enhancing preoperative counselling and enabling targeted behavioural monitoring strategies.
Hamou, H.; Kernbach, J.; Ridwan, H.; Fay-Rodrian, K.; Clusmann, H.; Hoellig, A.; Veldeman, M.
Background: Chronic subdural hematoma (cSDH) recurrence requiring reoperation occurs in 5-33% of cases, representing a substantial clinical and economic burden. The ability to predict recurrence could enable risk-stratified surveillance protocols, potentially reducing imaging burden in low-risk patients while maintaining close monitoring for high-risk individuals. We evaluated whether machine learning algorithms could achieve clinically actionable recurrence prediction using routinely available clinical and radiographic variables. Methods: This retrospective single-center study included 564 consecutive patients who underwent surgical evacuation of cSDH between 2015 and 2023. Data were randomly divided into training (75%, n=422) and test (25%, n=142) sets. We developed and compared three machine learning models (regularized logistic regression, Random Forest, and XGBoost) using 31 predictor variables including demographics, comorbidities, medications, laboratory values, hematoma characteristics, and postoperative features. Model development and hyperparameter tuning were performed exclusively on the training set using 10-fold cross-validation. The best-performing model was selected and evaluated on the held-out test set. The primary outcome was postoperative recurrence requiring reoperation. Results: Postoperative recurrence occurred in 170 patients (30.1%). Within the training set, XGBoost achieved the highest cross-validated ROC AUC of 0.713 (SE=0.024), outperforming regularized logistic regression (0.686) and matching Random Forest (0.713). Variable importance analysis identified hematoma volume, coagulation parameters (INR, platelets, aPTT), and disease severity markers (ICU admission, GCS) as the most influential predictors, though absolute effect sizes remained modest. On the held-out test set, the final XGBoost model achieved a ROC AUC of 0.688 (95% CI: 0.590-0.772) with excellent calibration. However, at the clinically relevant 90% sensitivity threshold, test set specificity was only 30.3%, meaning imaging could be spared in only about one-third of non-recurrence patients. The consistency between training and test performance confirmed that limitations stem from inherent predictor information content rather than overfitting. Conclusions: Machine learning models using routinely available clinical and radiographic variables cannot achieve clinically actionable risk stratification for cSDH recurrence. Despite rigorous methodology and internal validation, discriminative capacity remained insufficient to identify a low-risk patient subgroup suitable for de-escalated surveillance. These findings suggest that recurrence is driven by factors not captured in standard clinical assessment, and support either uniform surveillance protocols or symptom-driven imaging strategies rather than risk-stratified approaches.
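The 90%-sensitivity operating point described above can be sketched in a few lines: fix the threshold at the highest score that still captures at least 90% of recurrences, then read off the specificity achieved among non-recurrences. The scores below are toy values for illustration, not the study's model outputs:

```python
def spec_at_sensitivity(scores_pos, scores_neg, target_sens=0.90):
    """Find the highest threshold keeping sensitivity >= target,
    then report the specificity achieved at that threshold."""
    thresholds = sorted(set(scores_pos + scores_neg), reverse=True)
    for t in thresholds:
        sens = sum(s >= t for s in scores_pos) / len(scores_pos)
        spec = sum(s < t for s in scores_neg) / len(scores_neg)
        if sens >= target_sens:
            # Thresholds descend, so the first qualifying threshold
            # yields the highest attainable specificity.
            return t, sens, spec
    return None

# Toy data: 10 recurrences (positives), 10 non-recurrences (negatives)
pos = [0.9, 0.8, 0.8, 0.7, 0.6, 0.6, 0.5, 0.5, 0.4, 0.3]
neg = [0.7, 0.6, 0.5, 0.4, 0.4, 0.3, 0.3, 0.2, 0.2, 0.1]
t, sens, spec = spec_at_sensitivity(pos, neg)
print(f"threshold {t}, sensitivity {sens:.0%}, specificity {spec:.0%}")
```

At a fixed high sensitivity, specificity is exactly the fraction of non-recurrence patients whose follow-up imaging could be de-escalated, which is why the study reports it as the clinically decisive number.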
Thibault, S.; Williamson, R.; Wong, A. L.; Buxbaum, L. J.
Many individuals with limb apraxia after left-hemisphere stroke exhibit a lack of awareness of their tool-related action errors, i.e., unawareness of apraxia (UA; also called anosognosia of apraxia). Little is known about the prevalence of UA, the relationship between UA and apraxia severity, or its underlying mechanisms. Here, we assessed both the causes and consequences of UA. Based on a mechanistic model, we hypothesized that UA may arise because of deficits in representations signaling how tool-related movements should look and feel, a component of action knowledge, and that degradation of this knowledge impedes the detection of mismatches between planned and actual tool-related actions. We further predicted that a consequence of UA is a reduction in error-correction attempts. Fifty-six individuals with chronic left-hemisphere stroke (LCVA) gestured to show how to use tools. Immediately after the gesture production task, participants were asked if they made any errors. All participants also completed an action knowledge task to measure the integrity of tool-related movement goals. Individuals were denoted as exhibiting UA if they performed below a normative cutoff for apraxia yet reported making no errors. Our sample included 21 individuals with apraxia; of these, nearly half (48%) exhibited UA. These two groups made a comparable number of gesture errors and were of equivalent stroke severity, yet individuals with UA had significantly more impaired action knowledge. Additionally, individuals with UA were less likely to attempt to correct their errors compared to individuals who were aware of their apraxia. These data support the hypothesis that action knowledge (how tool actions look and feel) serves a key role in error detection and awareness of apraxia and may contribute to the difficulties with everyday tasks experienced by many people with apraxia.
Ledingham, D.; Sathyanarayana, S.; Iredale, R.; Stewart, C. B.; Foster, V.; Galley, D.; Baker, M. R.; Pavese, N.
Background: Historically, OFF burden in Parkinson's disease has been primarily attributed to motor features. Recent studies highlight that non-motor symptoms and the predictability of OFF episodes also drive functional impairment, yet they are rarely measured in clinical practice. Objective: To identify which clinical features are most closely associated with OFF time and OFF impact, and to quantify the added explanatory value of temporal predictability, non-motor, and behavioural domains beyond a core motor model. Methods: We analysed 1,252 OFF-only visits from 430 PPMI participants. Outcomes were MDS-UPDRS IV 4.3 (OFF time) and 4.4 (OFF impact). Linear mixed-effects models with a participant random intercept were fitted. The core motor model included OFF-state motor severity, freezing, tremor, levodopa responsiveness, and dyskinesia, plus covariates. Predictability (IV; 4.5), non-motor (mood, fatigue/sleep, autonomic/GI), and behavioural (impulse control behaviours) domains were then added to assess added influence beyond the motor model. Analyses were stratified by time since diagnosis (pooled; ≤4y; ≥6y). Results: Clinical features explained more variance in OFF impact than OFF time (25.9% vs 8.1%). OFF time was primarily linked to OFF-state motor severity/freezing, with levodopa responsiveness important early. For OFF impact, predictability produced the largest increment in marginal R squared beyond the core motor model (pooled and late). Within the core motor model, tremor was the largest contributor to OFF impact. Conclusions: Predictability is a prominent correlate of OFF impact. Asking about predictability may help tailor therapy, from timing optimisation to on-demand rescue for unpredictable episodes.
Calame, D. G.; Wiener, E.; Gavazzi, F.; Sevagamoorthy, A.; Pizzino, A.; Arnold, K.; Gonzalez, C. D.; Jammihal, T.; Bennett, M.; Adang, L.; Woidill, S.; Whitehead, M. T.; Vossough, A.; D'Aiello, R.; Takanohashi, A.; Lele, J.; Simons, C.; Rius, R.; Formaini, E.; Sullivan, K. E.; Andzelm, M.; Ebrahimi-Fakhari, D.; Otten, C.; Wong, S.; Reynolds, T.; Schiffmann, R.; Wolf, N. I.; Waisfisz, Q.; Niermeijer, J.-M.; DeMarzo, D.; Dawood, M.; Gandhi, M.; Levine, J. M.; Chinn, I. K.; Fisher, K.; Emrick, L.; Al Alam, C.; Kaiyrzhanov, R.; Maroofian, R.; Houlden, H.; Jhangiani, S. N.; Mehta, H. H.; Muzny, D.
Purpose: Aicardi-Goutières syndrome (AGS) is a type I interferonopathy presently associated with nine genes. PTPN1 is a negative regulator of the interferon pathway, previously associated with chronic inflammation and, recently, with type I IFN autoinflammation. Methods: Genomic data from undiagnosed individuals with suspected AGS were interrogated for PTPN1 variants, and predicted loss-of-function (pLOF) and damaging missense variants in PTPN1 were sought in two additional academic databases as well as the All of Us database. Results: We identified 13 cases with ultra-rare heterozygous pLOF or highly damaging missense variants in PTPN1. Nine cases were identified in a cohort of 53 individuals (~17%) with clinical, imaging, and persistent biochemical features of AGS. Median age of onset was 1.75 years (IQR 0.67), significantly later (p<0.0001) than for other AGS genotypes. Four additional cases were identified in academic datasets with variable clinical features suggestive of autoinflammation. Additionally, 49 individuals with ultra-rare, damaging PTPN1 variants were identified in the All of Us database; none had features suggestive of AGS, but autoimmunity was highly prevalent (~21.6%). Conclusion: Our data implicate PTPN1 as a cause of later-onset presentations of AGS within a broader spectrum of autoinflammatory phenotypes. Segregation and biobank data demonstrate reduced penetrance, with carriers being enriched for autoimmune disorders.
Hausmann, A. C.; Querbach, S. K.; Rubbert, C.; Schnitzler, A.; Caspers, J.; Hartmann, C. J.
Background: Neurite orientation dispersion and density imaging (NODDI) shows promise in providing specific insights into the neurite morphology underlying white matter (WM) damage in neurodegenerative diseases. This study aimed to advance the currently limited knowledge by characterizing NODDI-derived microstructural WM alterations in Wilson disease (WD) and examining their relationships with clinical symptoms. Methods: 30 WD patients, including 19 with predominant neurological involvement (neuro-WD) and 11 with hepatic manifestation (hep-WD), and 30 matched healthy controls underwent multi-shell diffusion-weighted magnetic resonance imaging. NODDI metrics, including the neurite density index (NDI), orientation dispersion index (ODI), and isotropic volume fraction (ISOVF), and diffusion tensor imaging-based fractional anisotropy (FA) were estimated. Group differences in diffusion parameters across the WM skeleton were determined using tract-based spatial statistics. Additionally, voxel-wise correlations with neurological and cognitive scores were investigated. Results: We observed widespread NDI and ODI reductions in neuro-WD patients and ISOVF increases in hep-WD patients compared with healthy controls, particularly involving the corpus callosum, corona radiata, superior longitudinal fasciculus, external and internal capsule, and superior fronto-occipital fasciculus. A comparable yet more subtle pattern was found when comparing phenotypes. Distinct NDI and ODI constellations were identified as the microstructural determinants of FA alterations. Decreased NDI in the aforementioned fibers was correlated with neurological impairment, processing speed, and visual attention. Conclusions: Phenotype-specific microstructural WM alterations were identified, characterized by globally reduced axonal density and fiber organization in neuro-WD and excess free water in hep-WD.
NODDI could be useful as an imaging biomarker for forecasting conversion to neurological WD manifestations and monitoring of disease progression.
Coughlin, D.; Gochanour, C.; Yin, J.; Concha-Marambio, L.; Farris, C.; Ma, Y.; Lafontant, D.-E.; Jabbari, E.; Simuni, T.; Marek, K.; Tropea, T.
Studies reporting alpha-synuclein seed amplification assay (aSyn-SAA) results are often cross-sectional. Here we investigated the intra-individual consistency of aSyn-SAA results over time from participants in the Parkinson's Progression Marker Initiative (PPMI). A total of 1238 participants had >1 CSF aSyn-SAA result for analysis (Parkinson's disease [PD]=633, prodromal=563, healthy control [HC]=42), collected over a median (min, max) of 2.0 (0.4, 11.4) years. Emphasis was placed on evaluating consistency in less common results, such as aSyn-SAA- PD participants, aSyn-SAA+ HC participants, and conversion from negative to positive results in prodromal participants. Of aSyn-SAA+ PD participants, 96% (474/493, 95%CI 94-98%) remained positive in subsequent samples, and 92% (116/126, 95%CI 86-96%) of aSyn-SAA- PD participants remained negative. 99% (303/307, 95%CI 97-99%) of aSyn-SAA+ prodromal participants remained positive, and 95% (234/247, 95%CI 91-97%) of aSyn-SAA- prodromal participants remained negative. 89% (16/18, 95%CI 67-97%) of aSyn-SAA+ HC participants remained positive, and 87% (20/23, 95%CI 68-95%) of aSyn-SAA- HC participants remained negative. These results confirm a high consistency of aSyn-SAA results over time, even for less expected results.
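The binomial confidence intervals quoted above are consistent with a Wilson score interval; a minimal reimplementation (assuming that choice of interval, which the abstract does not state) reproduces the first quoted CI:

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial
    proportion of k successes out of n trials."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# 474 of 493 aSyn-SAA+ PD participants remained positive
lo, hi = wilson_ci(474, 493)
print(f"474/493 = {474 / 493:.0%}, 95% CI {lo:.0%}-{hi:.0%}")
```

Unlike the naive Wald interval, the Wilson interval stays inside [0, 1] and behaves well for proportions near 1, which matters for consistency rates like these.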
Robertson, J. W.; Adanyeguh, I.; Ashizawa, T.; Bender, B.; Cendes, F.; Coarelli, G.; Deistung, A.; Diciotti, S.; Durr, A.; Faber, J.; Franca, M. C.; Goricke, S. L.; Grisoli, M.; Joers, J. M.; Klockgether, T.; Lenglet, C.; Mariotti, C.; Martinez, A. R.; Marzi, C.; Mascalchi, M.; Nigri, A.; Oz, G.; Paulson, H.; Rakowicz, M. J.; Reetz, K.; Rezende, T. J.; Sarro, L.; Schols, L.; Synofzik, M.; Timmann, D.; Thomopoulos, S. I.; Thompson, P. M.; van de Warrenburg, B.; Hernandez-Castillo, C. R.; Harding, I. H.
Objective: Spinocerebellar ataxia type 1 (SCA1) is a rare, inherited neurodegenerative disease characterised by progressive deterioration of motor and cognitive function. Here, we illustrate the pattern and evolution of brain atrophy in people with SCA1 using a large multisite dataset. Methods: Structural magnetic resonance imaging data from SCA1 (n=152) and healthy control (n=131) participants from seven sites and two consortia were analyzed using voxel-based morphometry. Cross-sectional stratification and correlations were undertaken with ataxia severity and duration to profile disease evolution. Cerebrocerebellar structural covariance analysis was used to understand the relationship between cerebral and cerebellar tissue atrophy. Results: Atrophy in SCA1 first manifests in the lower brainstem and cerebellar white matter (WM), before progressing to the pons, anterior cerebellum, and cerebellar lobule IX. The midbrain and peri-thalamic WM and the remainder of the cerebellar cortex are then affected, with preferential involvement of specific motor and cognitive areas. Finally, degeneration in the striatum and in cerebral WM corresponding to the corticospinal tract becomes apparent. Atrophy and correlations with ataxia severity are most pronounced in the cerebellar WM and pons. Structural covariance analysis showed reduced correlations between cerebellar and cerebral WM volume in SCA1 participants. Interpretation: Cross-sectional stratification of a large SCA1 cohort by ataxia severity indicates a pattern of atrophy spreading across the brainstem, cerebellum, and subcortical grey and white matter. Ongoing volume loss throughout the disease course is most evident in a core set of infratentorial brain regions. Atrophy of the cerebellum spans both motor and cognitive functional zones. Cerebellar degeneration is not directly mirrored by downstream effects in the cerebrum.
Kurtz, J.; Billot, A.; Falconer, I.; Small, H.; Charidimou, A.; Kiran, S.; Varkanitsa, M.
Background: Theory of Mind (ToM) deficits are well-documented in right-hemisphere stroke but remain understudied in post-stroke aphasia. Prior work suggests that performance on tasks assessing ToM may be relatively preserved in aphasia and dissociable from language impairment, but these findings are based largely on small studies. This study examined performance on nonverbal false-belief tasks in post-stroke aphasia, its relationship with aphasia severity, and whether vascular brain health, operationalized using cerebral small vessel disease (CSVD) markers, contributed to variability in performance. Methods: Forty-four individuals with aphasia completed two nonverbal belief-reasoning tasks assessing spontaneous perspective-taking and self-perspective inhibition. Task accuracy served as the primary outcome. Linear regression models examined associations between task performance, aphasia severity (Western Aphasia Battery-Revised Aphasia Quotient), and CSVD markers, including white matter hyperintensities, cerebral microbleeds, lacunes, and enlarged perivascular spaces in the basal ganglia and centrum semiovale. Results: Performance was heterogeneous across tasks, with reduced performance observed in 23% of participants on the Reality-Unknown task and 36% on the Reality-Known task. Aphasia severity was not associated with task accuracy. A greater cerebral microbleed count was associated with lower accuracy on both tasks, while greater basal ganglia enlarged perivascular spaces burden showed a more selective association with lower performance. Conclusions: Performance on nonverbal false-belief tasks in aphasia is variable and not explained by aphasia severity alone. These findings suggest that apparent ToM-related difficulties in aphasia may be shaped by broader vascular brain health, supporting a more multidimensional framework for interpreting social-cognitive task performance after stroke.
Bisteau, X.; Bastide, L.; Imbault, V.; Perrotta, G.; Borrelli, S.; Elands, S.; van Pesch, V.; Borras, E.; Sabido, E.; Gaspard, N.; Communi, D.
Despite important advances in understanding the etiopathology of multiple sclerosis, the factors determining disease progression remain only partially understood and are often difficult to predict. Specific diagnostic and prognostic biomarkers are needed to optimize the risk-benefit ratio of treatment for each patient. The aim of our study was to identify a cerebrospinal fluid (CSF) proteomic signature associated with diagnosis and short- to mid-term prognosis across the multiple sclerosis continuum. Our multicentric cohort study analyzed CSF samples from 120 patients using a data-independent acquisition proteomics strategy. Differentially expressed proteins were identified across diagnostic groups: 62 patients with multiple sclerosis, 15 patients with clinically isolated syndrome (CIS), and 43 healthy controls. We also compared the CSF of patients with no evidence of disease activity (NEDA) with that of patients with evidence of disease activity (EDA) at 2 and 5 years of follow-up. A diagnostic and prognostic classification model was built using iterative cross-validated logistic regression models on the differentially expressed proteins shared across these two comparisons. A total of 1,257 proteins were quantified, and 162 differentially expressed proteins were identified across comparisons. We identified a set of ten proteins associated with the diagnosis and prognosis of multiple sclerosis, including previously identified candidate biomarkers (CH3L2, IGHG1, IGKC, LAMP2, ADA2), proteins known to be involved in the pathophysiology of multiple sclerosis (A0A8J8YUT9, AT2A2, CO3A1), and two previously unreported proteins (DSC2 and MMRN2). Multivariate models based on these proteins achieved good accuracy for the diagnosis of multiple sclerosis versus CIS (area under the receiver operating characteristic curve [AUROC] up to 80% using 3 proteins) and for prognosis (NEDA vs. EDA; AUROC up to 96% at 2 and 5 years, using 5 proteins). These results, which will require further validation of the new biomarkers, open new perspectives on multiple sclerosis pathophysiology and potential therapeutic targets.
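The AUROC figures quoted above come from ranking patients by a model score. A minimal stdlib sketch of that metric, using the Mann-Whitney formulation; the panel scores below are invented stand-ins for the real protein-based models:

```python
def auroc(scores_pos, scores_neg):
    """Mann-Whitney estimate of the area under the ROC curve: the
    probability that a randomly chosen positive case scores higher than a
    randomly chosen negative case (ties count as 0.5)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Invented composite-panel scores (higher = more MS-like).
ms_scores = [0.81, 0.74, 0.92, 0.66, 0.88]   # multiple sclerosis
cis_scores = [0.55, 0.71, 0.48, 0.62]        # clinically isolated syndrome
print(f"AUROC: {auroc(ms_scores, cis_scores):.2f}")
```

The study's reported values additionally reflect iterative cross-validation, which guards against the optimism a single in-sample AUROC would show.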
Mehta, R.; Nambiar, P.; Kilbane, C.; Ghasia, F. F.; Shaikh, A. G.
Background: Visual dysfunction is a common but underrecognized contributor to disability in Parkinson's disease (PD), particularly deficits in binocular vision and vergence that impair reading, near work, and quality of life. The relationship between objective oculomotor abnormalities and patient-reported visual disability remains incompletely understood. Methods: We studied 25 individuals with PD and 11 age-matched controls who completed the National Eye Institute Visual Function Questionnaire-25 (VFQ-25) and the Convergence Insufficiency Symptom Survey (CISS). Participants underwent comprehensive clinical ophthalmologic assessment and high-resolution binocular eye tracking to quantify vergence latency, gain, fixation dynamics, and drift variability. Associations between objective measures and patient-reported outcomes were examined, and predictive models were developed using clinic-only and combined clinical-plus-eye-tracking approaches. Results: Compared with controls, PD participants demonstrated significantly worse VFQ-25 composite scores and higher CISS scores, driven primarily by impairments in near activities and mental health. Clinically, PD was characterized by convergence insufficiency rather than generalized visual loss. Objective eye tracking revealed delayed vergence initiation, reduced gain, and increased instability. In PD, both clinical convergence measures (notably the near point of convergence) and dynamic eye-tracking metrics strongly correlated with VFQ-25 and CISS scores, whereas such relationships were absent in controls. Predictive models showed limited performance using clinic measures alone, but improved with the inclusion of eye-tracking variables. Conclusions: Visual disability in PD is tightly linked to convergence insufficiency and dynamic oculomotor instability. Simple clinical measures such as the near point of convergence, augmented by eye tracking when available, provide meaningful insight into patient-reported visual quality of life.
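Vergence latency and gain, the two eye-tracking metrics named above, can be illustrated on a synthetic trace. The sampling rate, onset threshold, and signal shape below are assumptions for illustration, not the authors' analysis pipeline:

```python
def vergence_metrics(trace, stim_amplitude_deg, fs_hz=1000, onset_thresh_deg=0.1):
    """Latency: time of the first sample exceeding a small position threshold.
    Gain: final vergence change divided by the stimulus amplitude."""
    onset_idx = next(i for i, v in enumerate(trace) if abs(v) > onset_thresh_deg)
    latency_ms = onset_idx * 1000.0 / fs_hz
    gain = trace[-1] / stim_amplitude_deg
    return latency_ms, gain

# Synthetic 10-degree convergence step: 200 ms of steady fixation, then a
# slow ramp that stops short of the target -- the delayed-initiation,
# reduced-gain pattern described in PD.
trace = [0.0] * 200 + [8.0 * i / 300 for i in range(1, 301)]
latency, gain = vergence_metrics(trace, stim_amplitude_deg=10.0)
print(f"latency: {latency:.0f} ms, gain: {gain:.2f}")
```

Real recordings are noisy, so onset detection typically uses velocity criteria and filtering rather than a raw position threshold.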
Saha, S.; Georgiou-Karistianis, N.; Teo, V.; Szmulewicz, D. J.; Strike, L. T.; Franca, M. C.; Rezende, T. J.; Harding, I. H.
Background: Friedreich ataxia (FRDA) is a rare neurodegenerative disorder with substantial heterogeneity in clinical presentation and progression, complicating prognosis and trial design. Neuroimaging offers objective biomarkers to track disease evolution, yet variability in progression patterns remains poorly understood. Objective: To identify biologically meaningful FRDA progression subtypes using longitudinal multimodal MRI and to assess their associations with demographic, genetic, and clinical factors. Methods: Longitudinal structural and diffusion MRI data from 54 individuals with FRDA and 57 controls were analysed. Annualised progression rates of macrostructural (volumetric) and microstructural (diffusion) features across cerebellum, brainstem, and spinal cord regions were clustered using Gaussian Mixture Models. Cluster robustness was assessed using per-cluster Jaccard similarity and other validation metrics. Random Forest classification examined predictors of cluster membership. Results: Three reproducible clusters/subtypes emerged: micro-dominant/dual progression, characterised by widespread microstructural deterioration with modest volumetric decline; macro-dominant, marked by pronounced volumetric decline with minimal microstructural change; and minimal/no progression, showing negligible change in all measures. FRDA participants predominated in the first two clusters. Random Forest prediction of cluster membership using clinical and demographic variables identified the length of the trinucleotide repeat expansion in the FXN gene as the key predictor. Conclusions: Data-driven clustering of longitudinal MRI identified distinct FRDA subtypes with unique co-progression patterns, underscoring genetic burden as a key driver. Recognising such heterogeneity can improve patient stratification, enable personalised monitoring, and guide targeted therapeutic strategies. Future studies should validate these subtypes in larger, more diverse cohorts and integrate additional biomarkers for enhanced precision.
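The per-cluster Jaccard stability check named in the Methods can be sketched as follows. The memberships are invented, and a real analysis would compare the original solution against many bootstrap re-clusterings rather than one:

```python
def jaccard(a, b):
    """Jaccard similarity between two sets of participant IDs."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Invented memberships: each named subtype from the original clustering is
# matched to its most similar cluster in a bootstrap re-clustering, and the
# best Jaccard value is taken as that cluster's stability.
original = {
    "micro_dominant": {1, 2, 3, 4, 5},
    "macro_dominant": {6, 7, 8},
    "minimal": {9, 10},
}
bootstrap = [{1, 2, 3, 5}, {4, 6, 7, 8}, {9, 10}]
stability = {name: max(jaccard(members, c) for c in bootstrap)
             for name, members in original.items()}
print(stability)
```

A common rule of thumb treats mean per-cluster Jaccard above roughly 0.75 as a stable cluster, though the threshold is a convention rather than a theorem.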
Coupland, L. A.; Frost, S. A.; Lin, J.; Pham, N.; Suryana, E.; Self, M.; Chia, J.; Lam, T.; Liu, Z.; Jaich, R.; Crispin, P.; Rabbolini, D.; Law, R.; Keragala, C.; Medcalf, R.; Aneman, A.
Rationale: Fibrinolysis resistance in sepsis is associated with thrombotic burden, multi-organ failure, and death. The degrees and dynamics of resistance that associate with mortality in acute sepsis are unknown, and a simple tool to aid clinician interpretation of fibrinolysis measurements is lacking. Objectives: To establish a point-of-care grading tool for fibrinolysis resistance that aligns with scoring systems for disease acuity, is substantiated by plasma fibrinolysis markers, and enables rapid investigation of the fibrinolytic state at the point of care. Methods: Prospective observational study of 116 adult patients with sepsis/septic shock, with sequential measurements of fibrinolysis resistance during intensive care unit (ICU) admission using tissue plasminogen activator (tPA)-enhanced viscoelastic testing (VET). The clot lysis time (TPA-LT) adjusted for fibrin clot amplitude (TPA-LT/FIBA10, sec/mm) underwent cluster analysis and was evaluated against disease severity scores, standard pathology, clinical outcomes, and fibrinolysis markers. Measurements and Main Results: Three clusters of progressively increasing fibrinolysis resistance were identified (Grades 1-3). At admission, Grade 3 was associated with the highest disease severity, organ failure, haematological and biochemical perturbations, and inhibitory fibrinolysis marker profile, and with the highest mortality (42% versus 24% and 15% in Grade 2 and Grade 1, respectively), corresponding to a 3.9-fold [95% CI 1.4-11] increased hazard of death at 28 days compared with Grade 1. Transitions between grades were frequent over 7 days, with a reduction in grade associated with decreased risk of death. Conclusions: Grading of fibrinolysis resistance in sepsis enables rapid identification of patients at the greatest mortality risk, with dynamic improvement corresponding to favourable clinical outcomes.
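A bedside grading tool built on the TPA-LT/FIBA10 ratio could look like the sketch below. The two cutoffs are hypothetical placeholders, since the study derived its three grades from cluster analysis rather than fixed thresholds:

```python
# Hypothetical cutoffs for illustration only -- not the study's values.
def fibrinolysis_grade(tpa_lt_sec, fiba10_mm, cut_1_2=8.0, cut_2_3=20.0):
    """Grade fibrinolysis resistance from the clot lysis time (TPA-LT, sec)
    adjusted for fibrin clot amplitude (FIBA10, mm)."""
    ratio = tpa_lt_sec / fiba10_mm   # sec/mm; higher = more lysis-resistant
    if ratio < cut_1_2:
        return 1, ratio              # Grade 1: least resistant, lowest mortality
    if ratio < cut_2_3:
        return 2, ratio
    return 3, ratio                  # Grade 3: most resistant, highest mortality

grade, ratio = fibrinolysis_grade(tpa_lt_sec=300, fiba10_mm=10)
print(f"TPA-LT/FIBA10 = {ratio:.1f} sec/mm -> Grade {grade}")
```

Normalising lysis time by clot amplitude matters because a large, dense clot takes longer to lyse even when fibrinolytic activity is normal; the ratio separates resistance from clot size.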
Bovis, F.; Montobbio, N.; Signori, A.; Kalincik, T.; Arnold, D. L.; Tintore, M.; Kappos, L.; Sormani, M. P.
Disability worsening is the critical long-term outcome in multiple sclerosis, yet the Expanded Disability Status Scale incompletely captures neurological deterioration and has limited sensitivity in the short time windows of clinical trials. Composite endpoints incorporating functional measures have been proposed to address these limitations, but whether they reliably improve detection of treatment effects has not been established across trials. We conducted a post-hoc analysis of individual patient data from ten phase III randomised controlled trials (ASCEND, BRAVO, CONFIRM, DEFINE, EXPAND, INFORMS, OLYMPUS, OPERA I/II, and ORATORIO; n = 9,369), spanning relapsing-remitting and progressive multiple sclerosis. Confirmed disability worsening was defined using harmonised criteria with the msprog package and confirmed at 24 weeks. Treatment effects were estimated using Cox proportional hazards models and combined across trials in a one-stage individual patient data framework. Composite endpoints were constructed from the Expanded Disability Status Scale, the timed 25-foot walk test, and the nine-hole peg test using logical unions (OR-type), intersections (AND-type), and majority-vote structures. Sensitivity to treatment effect was quantified using Z-scores (the ratio of the pooled log-hazard ratio to its standard error) and compared with the Expanded Disability Status Scale reference using interaction tests. Event rates varied across components: the timed walk test generated the highest rates (up to 46.8%), while the nine-hole peg test generated the lowest (as low as 2.1%). OR-type composite endpoints showed weaker treatment effects than the Expanded Disability Status Scale alone, with the largest reductions in sensitivity observed for endpoints incorporating the timed walk test (ΔZ up to +2.26; interaction p = 0.004). These findings were consistent across disease subtypes and were most pronounced in relapsing-remitting trials, where no composite endpoint outperformed the Expanded Disability Status Scale. In progressive multiple sclerosis, the combination of the Expanded Disability Status Scale and the nine-hole peg test showed numerically stronger treatment effects (ΔZ = -1.65), though interaction tests did not reach statistical significance (p = 0.051). Composite endpoints do not systematically improve treatment effect detection in multiple sclerosis trials. Increased event capture driven by the timed walk test introduces noise that dilutes the treatment signal rather than amplifying it, highlighting that event rate and endpoint quality are not interchangeable. Upper limb function assessed by the nine-hole peg test provides complementary and specific information, particularly in progressive disease. The combination of global disability and upper limb measures represents a promising direction for future endpoint development in progressive multiple sclerosis trials, warranting validation.
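The endpoint constructions and the sensitivity metric compared in this analysis reduce to a few lines. A minimal sketch, with component event indicators and hazard-ratio values invented for illustration:

```python
import math

# Per-participant worsening indicators for the three components:
# (EDSS, timed 25-foot walk, nine-hole peg test).
def or_composite(events):
    return any(events)           # OR-type: any component worsens

def and_composite(events):
    return all(events)           # AND-type: every component worsens

def majority_composite(events):
    return sum(events) >= 2      # majority vote across the three components

# Endpoint sensitivity metric from the analysis:
# Z = pooled log hazard ratio / its standard error.
# A more negative Z indicates a more clearly detected treatment benefit.
def z_score(hazard_ratio, se_log_hr):
    return math.log(hazard_ratio) / se_log_hr

events = (True, False, False)    # worsening on EDSS only
print(or_composite(events), and_composite(events), majority_composite(events))
```

The OR-type construction inflates event counts (any single noisy component triggers an event), which is exactly how a higher event rate can lower, rather than raise, the resulting Z-score.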